I heard about this course in the MBDP Metagenomics course last spring. I wish to develop my R skills further and learn more about RMarkdown, version control and using GitHub.
Link to my GitHub repository: https://github.com/mnpuputti/IODS-project
This week we practised data wrangling and data visualization for linear models in DataCamp with the packages dplyr and ggplot2. We also practised writing data wrangling scripts in R with the “Learning2014” dataset. We used this dataset, modified the data to match the needs of the IODS course, and we call the resulting subset “Students2014”. The dataset is available here.
Let’s explore the “Students2014” dataset. It contains part of the international survey of Approaches to Learning.
#Read the "students2014" data into R from the .csv file created in the data wrangling part of week 2
students2014 <- read.csv("~/Documents/MAIJA/R_IODS/IODS-project/data/learning2014.csv", row.names = 1)
#Check dimensions and structure of the dataset.
dim(students2014)
## [1] 166 7
str(students2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
There are 166 observations and seven variables: gender, age, attitude, deep, stra, surf and points. Three of the variables summarise learning approach questions: “deep” = deep learning, “stra” = strategic learning and “surf” = surface learning. In addition there are variables for global attitude towards statistics (“attitude”), exam points (“points”) and the gender of the survey participants.
Load the libraries needed in data visualization
library(ggplot2)
library(dplyr)
library(GGally)
Check the data through visualizations
# Check a graphical overview with a basic plot of the pairwise relationships
plot(students2014, main = "Graphical overview of the students2014 dataset")
# Check a finer, more informative pairwise correlation plot, coloured by gender, with ggpairs
ggpairs(students2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
# Check summaries of the different variables
summary(students2014)
## gender age attitude deep stra
## F:110 Min. :17.00 Min. :14.00 Min. :1.583 Min. :1.250
## M: 56 1st Qu.:21.00 1st Qu.:26.00 1st Qu.:3.333 1st Qu.:2.625
## Median :22.00 Median :32.00 Median :3.667 Median :3.188
## Mean :25.51 Mean :31.43 Mean :3.680 Mean :3.121
## 3rd Qu.:27.00 3rd Qu.:37.00 3rd Qu.:4.083 3rd Qu.:3.625
## Max. :55.00 Max. :50.00 Max. :4.917 Max. :5.000
## surf points
## Min. :1.583 Min. : 7.00
## 1st Qu.:2.417 1st Qu.:19.00
## Median :2.833 Median :23.00
## Mean :2.787 Mean :22.72
## 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :4.333 Max. :33.00
# Or summaries can be picked individually, e.g.
summary(students2014$gender)
## F M
## 110 56
From the graphical overview and summary we can see that there are more female than male participants. On average, the female participants are younger than the male participants. Mean exam points are at a similar level for both sexes, while male participants score higher on the attitude variable. There are no very strong correlations between any pair of variables for either female or male participants.
The highest positive correlations with the “points” variable are observed for “attitude” and “stra”, and a negative correlation is observed for “surf”. The mean age is 25.51 years, with a range from 17 to 55 years. The mean exam score is 22.72 points, with a minimum of 7 and a maximum of 33 points. The deep learning mean is 3.68, the strategic learning mean is 3.12 and the surface learning mean is 2.79.
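As a quick numeric companion to the pairwise plot, we can also print the correlations of the other variables with exam points directly (a small sketch, assuming students2014 is loaded as above):
# correlations of the numeric variables with exam points (gender dropped)
sort(cor(students2014[, -1])[, "points"], decreasing = TRUE)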
#Create a regression model with three explanatory variables identified from the pairwise correlation plot: stra, age and attitude
model1 <- lm(points ~ stra + age + attitude, data = students2014)
#Check the summary of the regression model
set.seed(123)
summary(model1)
##
## Call:
## lm(formula = points ~ stra + age + attitude, data = students2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -18.1149 -3.2003 0.3303 3.4129 10.7599
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.89543 2.64834 4.114 6.17e-05 ***
## stra 1.00371 0.53434 1.878 0.0621 .
## age -0.08822 0.05302 -1.664 0.0981 .
## attitude 0.34808 0.05622 6.191 4.72e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.26 on 162 degrees of freedom
## Multiple R-squared: 0.2182, Adjusted R-squared: 0.2037
## F-statistic: 15.07 on 3 and 162 DF, p-value: 1.07e-08
The summary shows that, of the tested explanatory variables, only “attitude” can be considered statistically significant based on the p-values. Attitude has the clearest effect on the points, with a positive coefficient and a p-value of 4.72e-09, while stra shows a p-value of 0.0621 and age 0.0981. The estimate for attitude is 0.35 and for stra 1.00; for the age variable the estimate is negative. We can continue with the “attitude” variable and re-fit the model without the non-significant variables.
#Re-fit the model with significant variable "attitude"
model2 <- lm(points ~ attitude, data = students2014)
#Check the summary of the regression model and plot the results
set.seed(123)
summary(model2)
##
## Call:
## lm(formula = points ~ attitude, data = students2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.63715 1.83035 6.358 1.95e-09 ***
## attitude 0.35255 0.05674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
Now we can evaluate how well the model and the data fit. The R-squared value tells us what proportion of the variation in exam points the model explains. Multiple R-squared is 0.1906, which means that our model explains about 19 % of the variability in points. The estimate for the attitude parameter is 0.35255 with a standard error of 0.05674.
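As a sanity check of that figure, the R-squared can also be computed by hand from the residual and total sums of squares (a sketch, assuming model2 and students2014 from above):
# R-squared = 1 - SS_residual / SS_total
ss_res <- sum(residuals(model2)^2)
ss_tot <- sum((students2014$points - mean(students2014$points))^2)
1 - ss_res / ss_tot  # should be approximately 0.1906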
#Check how the model looks with "points" and "attitude"
qplot(attitude, points, data = students2014) + geom_smooth(method = "lm")
#Draw and explore the diagnostic plots
par(mfrow = c(2,2))
plot(model2, which = c(1,2,5))
Residuals are the errors the model makes when predicting the target variable, and we want these errors to be small and well behaved. Residuals vs Fitted plot: no major spreading of the values can be detected, although a few outliers appear as the fitted values increase. The Normal Q-Q plot assesses whether the errors are approximately normally distributed. The data follows the line reasonably well, apart from a few outliers at the lower quantiles, so we can say that the errors are roughly normally distributed in this model. The Residuals vs Leverage plot shows that no individual observations have an unusually large impact on the model.
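If we wanted to inspect those outliers more closely, a simple sketch is to list the observations with the largest absolute residuals:
# observations with the largest absolute residuals in model2
head(sort(abs(residuals(model2)), decreasing = TRUE), 3)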
This dataset contains data on student achievement in two Portuguese schools. We look at how alcohol consumption habits affect student performance.
The student.zip file can be found here, and information on the dataset background here.
Let’s load needed libraries
library(dplyr)
library(ggplot2)
library(tidyr)
Let’s read in the data from our data wrangling part. Alternatively, the modified dataset can be downloaded from here
alc <- read.csv("~/Documents/MAIJA/R_IODS/IODS-project/data/alc.csv", row.names = 1)
#Let's check the variables:
names(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "guardian" "traveltime"
## [16] "studytime" "failures" "schoolsup" "famsup" "paid"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "goout" "Dalc" "Walc" "health" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
#or use glimpse
glimpse(alc)
## Observations: 382
## Variables: 35
## $ school <fct> GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, G…
## $ sex <fct> F, F, F, F, F, M, M, F, M, M, F, F, M, M, M, F, F, F,…
## $ age <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15, 1…
## $ address <fct> U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, U,…
## $ famsize <fct> GT3, GT3, LE3, GT3, GT3, LE3, LE3, GT3, LE3, GT3, GT3…
## $ Pstatus <fct> A, T, T, T, T, T, T, A, A, T, T, T, T, T, A, T, T, T,…
## $ Medu <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, 3,…
## $ Fedu <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, 3,…
## $ Mjob <fct> at_home, at_home, at_home, health, other, services, o…
## $ Fjob <fct> teacher, other, other, services, other, other, other,…
## $ reason <fct> course, course, other, home, home, reputation, home, …
## $ nursery <fct> yes, no, yes, yes, yes, yes, yes, yes, yes, yes, yes,…
## $ internet <fct> no, yes, yes, yes, no, yes, yes, no, yes, yes, yes, y…
## $ guardian <fct> mother, father, mother, mother, father, mother, mothe…
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3,…
## $ studytime <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, 2,…
## $ failures <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ schoolsup <fct> yes, no, yes, no, no, no, no, yes, no, no, no, no, no…
## $ famsup <fct> no, yes, no, yes, yes, yes, no, yes, yes, yes, yes, y…
## $ paid <fct> no, no, yes, yes, yes, yes, no, no, yes, yes, yes, no…
## $ activities <fct> no, no, no, yes, no, yes, no, no, no, yes, no, yes, y…
## $ higher <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, yes…
## $ romantic <fct> no, no, no, yes, no, no, no, no, no, no, no, no, no, …
## $ famrel <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, 5,…
## $ freetime <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, 3,…
## $ goout <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, 2,…
## $ Dalc <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,…
## $ Walc <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 1,…
## $ health <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, 4,…
## $ absences <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, 3,…
## $ G1 <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11, 1…
## $ G2 <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11, …
## $ G3 <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 12,…
## $ alc_use <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.5…
## $ high_use <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE…
These parameters are fully explained at the data repository site. The key variables briefly: Dalc describes workday alcohol consumption and Walc weekend alcohol consumption, both on a scale from 1 (very low) to 5 (very high), and alc_use is their average. high_use indicates whether the student’s average alcohol use (alc_use) is greater than 2.
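For reference, these derived columns come from the data wrangling part; roughly, they would be created along the following lines (a sketch, assuming the joined student data is already in alc):
# alc_use is the average of workday and weekend consumption,
# high_use flags an average greater than 2
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2, high_use = alc_use > 2)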
Let’s see how the data looks in order to pick 4 interesting variables:
# draw a bar plot of each variable
gather(alc) %>% ggplot(aes(value)) + geom_bar(color="white",fill="pink") + facet_wrap("key", scales = "free")
Let’s pick the variables absences, school, age and sex and explore how they are associated with alcohol consumption. Our hypothesis is that male students are more often high users of alcohol than female students. We can also hypothesize that high alcohol use leads to more absences from school, test whether high alcohol use is linked to age, and see whether there is a difference in the drinking culture between the two schools.
First, let’s explore how students use alcohol in these two schools and whether sex is associated with it.
plot1 <- ggplot(data = alc, aes(x=alc_use, fill=school))
plot1 + geom_bar() +
facet_wrap("sex")
Here we can see that school “GP” shows higher counts across the alcohol use levels than school “MS”. In addition, the female participants have higher counts at the very low and low average alcohol consumption levels compared to males.
Let’s see whether this changes for high_use
plot2 <- ggplot(data = alc, aes(x=high_use, fill=school))
plot2 + geom_bar() +
facet_wrap("sex")
The counts of high alcohol consumption are also higher in school “GP” and among male students. This supports our hypotheses that male students consume more alcohol and that students in the two schools may have a different drinking culture and attitude towards drinking alcohol.
Let’s see what the relationship is between absences, high alcohol use and sex.
# plot3 explores the relationship of high_use and absences, coloured by sex
plot3 <- ggplot(alc, aes(x = high_use, y = absences, col = sex))
# define the plot as a boxplot and draw it
plot3 + geom_boxplot() + ylab("absences") + ggtitle("Student absences by high alcohol consumption")
Here we can see that high use of alcohol is associated with a larger number of absences. Male students have more absences than female students, which is expected, since we already observed that male students have higher average alcohol use.
# plot4 explores the relationship of high_use and absences, faceted by age
plot4 <- ggplot(alc, aes(x = high_use, y = absences))
# define the plot as a boxplot and draw it
plot4 + geom_boxplot(color="darkblue") + ylab("absences") + facet_wrap("age") + ggtitle("Student absences by high alcohol consumption and age") + theme_bw()
Here we can see that high alcohol consumption is most common among 17-year-old students, and that for the high users the number of absences also tends to be higher.
Let’s explore how high use of alcohol is linked to the explanatory variables using logistic regression.
#We can now explore the relationship between four variables and high alcohol consumption with logistic regression
model1 <- glm(high_use ~ school+absences+sex+age, data = alc, family = "binomial")
#Let's study the summary of our fitted model
summary(model1)
##
## Call:
## glm(formula = high_use ~ school + absences + sex + age, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.3081 -0.8357 -0.6384 1.0811 2.1077
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -4.29534 1.79304 -2.396 0.0166 *
## schoolMS 0.30068 0.39803 0.755 0.4500
## absences 0.09467 0.02342 4.042 5.31e-05 ***
## sexM 0.99332 0.24134 4.116 3.86e-05 ***
## age 0.14593 0.10850 1.345 0.1786
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 426.48 on 377 degrees of freedom
## AIC: 436.48
##
## Number of Fisher Scoring iterations: 4
Here we can see that age and school are not statistically significant predictors of high_use, despite our earlier interpretation from the visualizations. Note that school and sex are factor variables, so their reference levels (GP and F) are absorbed into the intercept. Let’s explore the model without the school variable.
#Let's re-fit our model without the school variable
model2 <- glm(high_use ~ sex+age+absences, data = alc, family = "binomial")
#Let's study the summary of our fitted model
summary(model2)
##
## Call:
## glm(formula = high_use ~ sex + age + absences, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.3002 -0.8428 -0.6386 1.0781 2.1068
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -4.73160 1.69730 -2.788 0.00531 **
## sexM 0.98440 0.24074 4.089 4.33e-05 ***
## age 0.17516 0.10125 1.730 0.08365 .
## absences 0.09223 0.02314 3.986 6.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 427.04 on 378 degrees of freedom
## AIC: 435.04
##
## Number of Fisher Scoring iterations: 4
Now we can see that the reference level for female students is contained in the intercept (-4.73), and the estimate for male students is 0.98.
#Let's study the coefficients of our model
coef(model2)
## (Intercept) sexM age absences
## -4.73159988 0.98439673 0.17515570 0.09223477
#Next we can explore the coefficients and their confidence intervals as odds ratios
# We can compute odds ratios (oddrat)
oddrat <- coef(model2) %>% exp
# compute confidence intervals (confin)
confin <- confint(model2) %>% exp
#Now we can join and print out our model's odds ratios with their confidence intervals
cbind(oddrat, confin)
## oddrat 2.5 % 97.5 %
## (Intercept) 0.008812361 0.0002990615 0.2356076
## sexM 2.676196933 1.6793970873 4.3232945
## age 1.191431709 0.9782468520 1.4562441
## absences 1.096622249 1.0500945691 1.1499088
The odds ratio describes how the odds of the outcome change between groups (or per unit increase of a continuous variable). Here we can see that the odds ratios of all explanatory variables are greater than 1. This means, for example, that male students have about 2.7 times the odds of being high users of alcohol compared to female students, and that additional absences increase the odds of high alcohol consumption.
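To make the odds ratio idea concrete, we could also compute a crude, unadjusted odds ratio for sex straight from a 2x2 table; note that this ignores age and absences, so it will not exactly match the adjusted ratio of about 2.7 from the model (a sketch):
# crude odds ratio of high_use for male vs female students
tab <- table(sex = alc$sex, high_use = alc$high_use)
odds_m <- tab["M", "TRUE"] / tab["M", "FALSE"]
odds_f <- tab["F", "TRUE"] / tab["F", "FALSE"]
odds_m / odds_f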
Absences and sex were statistically significant as explanatory variables. We can now re-fit our model with these variables and explore the predictions.
# re-fit the model with statistically significant variables
model3 <- glm(high_use ~ absences + sex, data = alc, family = "binomial")
# predict() the probability of high_use
probabilities <- predict(model3, type = "response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probabilities>0.5)
# see the last ten original classes, predicted probabilities, and class predictions
select(alc, absences, sex, high_use, probability, prediction) %>% tail(10)
## absences sex high_use probability prediction
## 373 0 M FALSE 0.2976656 FALSE
## 374 7 M TRUE 0.4545495 FALSE
## 375 1 F FALSE 0.1493808 FALSE
## 376 6 F FALSE 0.2215747 FALSE
## 377 2 F FALSE 0.1620742 FALSE
## 378 2 F FALSE 0.1620742 FALSE
## 379 2 F FALSE 0.1620742 FALSE
## 380 3 F FALSE 0.1756235 FALSE
## 381 4 M TRUE 0.3841248 FALSE
## 382 2 M TRUE 0.3395595 FALSE
Here we can see that in these last ten rows all predictions are FALSE, so the actual high_use cases among them are not predicted correctly. Let’s explore this further with a table and a plot.
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 258 10
## TRUE 88 26
There were 88 actual high_use cases wrongly predicted as FALSE, 26 high_use cases predicted correctly, and 10 non-high-use students wrongly predicted as TRUE.
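The same table can be condensed into a single training error figure, the share of misclassified students (a sketch):
# proportion of wrong predictions in the training data
mean(alc$high_use != alc$prediction)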
# initialize a plot of 'high_use' versus 'probability' in 'alc'
g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
# define the geom as points and draw the plot
g + geom_point()
Here we can see that, by construction, the predictions turn TRUE exactly when the predicted probability exceeds 0.5.
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table %>% addmargins
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.67539267 0.02617801 0.70157068
## TRUE 0.23036649 0.06806283 0.29842932
## Sum 0.90575916 0.09424084 1.00000000
# the logistic regression model model3 and the dataset alc (with predictions) are available
# define a loss function (average prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# K-fold cross-validation to estimate the average prediction error
library(boot)
crossval1 <- cv.glm(data = alc, cost = loss_func, glmfit = model3, K = 10)
# average number of wrong predictions in the cross validation
crossval1$delta[1]
## [1] 0.2670157
Since the non-significant variables were already excluded from our final model, we ended up exploring the same variables as in the DataCamp exercise, and the model reaches roughly the same cross-validation error of about 0.26.
#Let's explore our previous models model 1 and model 2.
# K-fold cross-validation
#model 1
crossval2 <- cv.glm(data = alc, cost = loss_func, glmfit = model1, K = 10)
# average number of wrong predictions in the cross validation
crossval2$delta[1]
## [1] 0.2434555
# K-fold cross-validation
#model 2
crossval3 <- cv.glm(data = alc, cost = loss_func, glmfit = model2, K = 10)
# average number of wrong predictions in the cross validation
crossval3$delta[1]
## [1] 0.2486911
#############
#And let's try a new test model with other variables: internet, school, age and failures
model4 <- glm(high_use ~ internet+school+age+failures, data = alc, family = "binomial")
# K-fold cross-validation
#model 4
crossval4 <- cv.glm(data = alc, cost = loss_func, glmfit = model4, K = 10)
# average number of wrong predictions in the cross validation
crossval4$delta[1]
## [1] 0.3062827
#Let's remove some of the variables one by one to see the effect on the predictions
model5 <- glm(high_use ~ internet+age+failures, data = alc, family = "binomial")
# K-fold cross-validation
#model 5
crossval5 <- cv.glm(data = alc, cost = loss_func, glmfit = model5, K = 10)
# average number of wrong predictions in the cross validation
crossval5$delta[1]
## [1] 0.3036649
#And let's remove more variables:
model6 <- glm(high_use ~ internet+failures, data = alc, family = "binomial")
# K-fold cross-validation
#model 6
crossval6 <- cv.glm(data = alc, cost = loss_func, glmfit = model6, K = 10)
# average number of wrong predictions in the cross validation
crossval6$delta[1]
## [1] 0.3010471
#And let's try a new model to explore options:
model7 <- glm(high_use ~ internet, data = alc, family = "binomial")
# K-fold cross-validation
#model 7
crossval7 <- cv.glm(data = alc, cost = loss_func, glmfit = model7, K = 10)
# average number of wrong predictions in the cross validation
crossval7$delta[1]
## [1] 0.2984293
#And let's try one more model with failures only:
model8 <- glm(high_use ~ failures, data = alc, family = "binomial")
# K-fold cross-validation
#model 8
crossval8 <- cv.glm(data = alc, cost = loss_func, glmfit = model8, K = 10)
# average number of wrong predictions in the cross validation
crossval8$delta[1]
## [1] 0.3141361
Here we can see that the test model 4, which combined internet access, school, age and failures, had a higher prediction error (about 31 %) than our previous models. None of the subsequent attempts of removing one variable at a time decreased the prediction error to the level of the earlier models.
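Instead of repeating the cv.glm() call for every candidate model, the comparison could also be written as a small loop over model formulas (a sketch using the same loss function and K = 10; the exact errors vary between runs because the folds are drawn at random):
# compare candidate models by their 10-fold cross-validation error
formulas <- list(high_use ~ absences + sex,
                 high_use ~ sex + age + absences,
                 high_use ~ internet + school + age + failures)
sapply(formulas, function(f) {
  fit <- glm(f, data = alc, family = "binomial")
  cv.glm(data = alc, cost = loss_func, glmfit = fit, K = 10)$delta[1]
})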
Let’s read in the Boston dataset included in the MASS package and load the other needed libraries. The Boston dataset contains housing-related data from Boston suburbs, and we explore the town crime rate using the other variables in the dataset as explanatory variables.
library(MASS)
library(tidyverse)
library(corrplot)
library(viridis)
library(ggplot2)
data(Boston)
Let’s explore the Boston dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
Boston dataset has 506 observations and 14 different variables. More information on different variables is available here.
#Let's explore the Boston data with graphical overview and summaries of the variables
pairs(Boston)
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
From the summary we can see, for example, that the mean per capita crime rate across the Boston suburbs is about 3.6. The nox variable gives the nitrogen oxides concentration (parts per 10 million), with a mean of about 0.55, and dis tells us that the weighted distance to the Boston employment centres ranges from about 1.1 to 12.1.
#Or explore the correlations with corrplot
# First we calculate the correlation matrix and round it (optionally print it)
cor_matrix <- cor(Boston)
#cor_matrix
rounded_cor_matrix <- cor_matrix %>% round(digits = 2)
#plot
corrplot(rounded_cor_matrix, type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
From the correlation plot we can see the positive and negative correlations between the crime rate (crim) and the other variables. The crime rate has the highest positive correlations with rad (index of accessibility to radial highways) and tax (full-value property-tax rate per $10,000). In contrast, the strongest negative correlations are with medv (median value of owner-occupied homes in $1000s), black (1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town) and dis (weighted mean of distances to five Boston employment centres).
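The same information can also be read numerically by sorting the correlations of crim with the other variables (a sketch, using the correlation matrix computed above):
# correlations of all variables with the crime rate, strongest first
sort(cor_matrix[, "crim"], decreasing = TRUE)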
#Let's standardise our dataset and scale our data
Bostonscaled <- scale(Boston)
#Let's change our matrix into a dataframe
Bostonscaled<-as.data.frame(Bostonscaled)
#Check the class
class(Bostonscaled)
## [1] "data.frame"
#Let's check the summary
summary(Bostonscaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
#Let's create quantiles for the crime rates
crimequantiles<- quantile(Bostonscaled$crim)
crimequantiles
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
#and rename them with labelstring
labelstring<-c("low","med_low","med_high","high")
crime <- cut(Bostonscaled$crim, breaks = crimequantiles, include.lowest = TRUE, label = labelstring)
#Let's make changes into the dataset and remove original crim
Bostonscaled <- dplyr::select(Bostonscaled, -crim)
# add the new categorical value to scaled data
Bostonscaled <- data.frame(Bostonscaled, crime)
#Let's see the number of rows in our dataset
n <- nrow(Bostonscaled)
#We can choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# We can create train set
train <- Bostonscaled[ind,]
# We can create test set
test <- Bostonscaled[-ind,]
#We can now use the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables (as dot in script)
lda.fit <- lda(crime ~.,data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2698020 0.2202970 0.2574257 0.2524752
##
## Group means:
## zn indus chas nox rm
## low 0.94776710 -0.8971760 -0.09172814 -0.8661508 0.41403330
## med_low -0.09773272 -0.2625464 0.08156758 -0.5620961 -0.13125022
## med_high -0.41220520 0.2703592 0.25766519 0.4240806 -0.00487157
## high -0.48724019 1.0171096 -0.04073494 1.0406599 -0.38642231
## age dis rad tax ptratio
## low -0.8731314 0.8574965 -0.6984422 -0.7588152 -0.40831234
## med_low -0.3444234 0.3681568 -0.5534543 -0.4905469 -0.06872784
## med_high 0.4573932 -0.4064705 -0.3999076 -0.2478149 -0.20286305
## high 0.8080500 -0.8529113 1.6382099 1.5141140 0.78087177
## black lstat medv
## low 0.38256024 -0.76416040 0.499383470
## med_low 0.35237238 -0.11838895 0.009138486
## med_high 0.08931233 0.08857418 0.056652597
## high -0.84222907 0.83994701 -0.633454131
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.10049915 0.72763600 -0.93452410
## indus -0.06232910 -0.25446786 0.46592588
## chas -0.06856708 -0.09745969 0.11177269
## nox 0.35900138 -0.52823345 -1.38164587
## rm -0.10933789 -0.06468908 -0.08375578
## age 0.32618623 -0.34883089 -0.24539175
## dis -0.11348351 -0.22009469 0.31728047
## rad 2.91204021 1.07347198 0.18122290
## tax 0.11480221 -0.22548808 0.37179438
## ptratio 0.10464710 0.05358570 -0.31555558
## black -0.15215701 -0.01281596 0.17254010
## lstat 0.16466416 -0.22434855 0.48464256
## medv 0.14840845 -0.29710857 -0.12316117
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.944 0.043 0.013
From the summary of the LDA we can see that the first linear discriminant (LD1) explains about 94 % of the between-group variance (proportion of trace). In addition, we can see the group means of the explanatory variables and check how they vary between the crime rate classes; e.g. age, nox and dis differ clearly between the low and high crime rate groups.
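The proportion of trace figures can be reproduced directly from the singular values stored in the fitted object (a sketch, assuming lda.fit from above):
# proportion of between-group variance explained by each linear discriminant
round(lda.fit$svd^2 / sum(lda.fit$svd^2), digits = 3)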
# Let's create the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "color", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# Set target classes as numeric
classes <- as.numeric(train$crime)
# Now we can draw the LDA biplot and add the variable arrows on top of it
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, col = classes, myscale = 1.5)
Alternatively, we can try the ggplot2 and viridis packages for the LDA biplot visualization. An example script can be found here, and more on the data management needed prior to ggplotting can be found here.
require(scales)
require(gridExtra)
#We can also try ggplot2 and viridis packages for LDA biplot visualization
# fit the LDA once with uniform prior probabilities
lda_gg <- lda(crime ~ ., data = train, prior = c(1, 1, 1, 1) / 4)
# proportion of between-group variance captured by each discriminant
prop.lda <- lda_gg$svd^2 / sum(lda_gg$svd^2)
# predict the discriminant scores for the training data
plda <- predict(object = lda_gg, newdata = train)
dataset2 <- data.frame(crime = train[, "crime"], lda = plda$x)
p1 <- ggplot(dataset2) + geom_point(aes(lda.LD1, lda.LD2, colour = crime), size = 2.5) +
labs(x = paste("LD1 (", percent(prop.lda[1]), ")", sep=""),
y = paste("LD2 (", percent(prop.lda[2]), ")", sep="")) +
scale_color_viridis(discrete = TRUE, option = "plasma") +
theme_bw()
p1
We can also try ordinations in ggord instead.
#library(devtools)
#install_github("fawda123/ggord")
library(ggord)
ord <- lda(crime ~ ., data = train, prior = rep(1, 4) / 4)
ggord(ord, train$crime, arrow = 0.6, txt = 4, size = 3)
From the plots we can see that the only group clearly separating from the others is the high crime rate group. In addition, we can see that the explanatory variable rad points towards the high crime rate group and nox towards the med_high crime rate group, whereas zn points towards the low crime rate group. This means that rad, the index of accessibility to radial highways, could almost on its own separate the high crime rate group in this model.
library(dplyr)
# We can now save the new classes from test data
correct_classes <- test$crime
correct_classes
## [1] low med_low med_low med_high med_high med_high med_high
## [8] med_low med_low med_low med_low med_low med_low med_low
## [15] med_low low med_low low low med_low med_low
## [22] med_low med_low med_low med_low med_low med_low med_low
## [29] med_low med_low med_high med_high med_high med_high med_high
## [36] med_low low low low med_low low low
## [43] med_low low low low med_low med_high med_high
## [50] med_high med_high med_high med_high med_low med_low med_high
## [57] med_high med_high med_high med_high med_high med_low low
## [64] med_low med_low low med_low med_high med_low low
## [71] low low low med_low high high high
## [78] high high high high high high high
## [85] high high high high high high high
## [92] high high high high high high high
## [99] high med_low med_low med_low
## Levels: low med_low med_high high
# And we can remove the crime variable from test data
test <- dplyr::select(test, -crime)
#Predictions of classes with LDA model and test data
set.seed(123)
lda.pred <- predict(lda.fit, newdata = test)
# And we can then cross tabulate the results
predictions<-table(correct = correct_classes, predicted = lda.pred$class)
predictions
## predicted
## correct low med_low med_high high
## low 12 5 1 0
## med_low 7 18 12 0
## med_high 1 7 14 0
## high 0 0 0 25
Predictions with the test data show that 12 low crime rate observations were correctly predicted as low, while 5 were falsely predicted as med_low and 1 as med_high. Of the med_low observations, 18 were predicted correctly, and of the med_high observations 14 were correct. All 25 high crime rate observations were correctly predicted as high. The model seems to work very well for the high crime rate class, but there are clearly more false predictions for the med_high, med_low and low crime rates.
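The overall share of correct test-set predictions can be computed from the same cross-tabulation (a sketch):
# proportion of correctly classified observations in the test set
sum(diag(predictions)) / sum(predictions)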
NOTE! Despite the set.seed() call, this table changes with every knit. The reason is that the seed is set only after the random train/test split: sample() is used to pick the training rows before set.seed() is called, so the split itself differs on every knit.
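A minimal fix would be to set the seed immediately before the random split, so that the same train/test division, and hence the same table, is produced on every knit (a sketch):
# set the seed BEFORE the random sampling, not after it
set.seed(123)
ind <- sample(nrow(Bostonscaled), size = nrow(Bostonscaled) * 0.8)
train <- Bostonscaled[ind, ]
test <- Bostonscaled[-ind, ]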
#We can reload the Boston dataset
data(Boston)
#Let's standardise the dataset by scaling the variables,
#and turn the resulting matrix back into a data.frame
Bostonscaled2 <- as.data.frame(scale(Boston))
#Now we can run k-means clustering, which groups observations by their Euclidean distances
# k-means clustering
km <-kmeans(Bostonscaled2, centers = 4)
#km$cluster
#plot
pairs(Bostonscaled2[1:10], col = km$cluster)
####
set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Bostonscaled2, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
From the qplot we can see that the optimal number of clusters is 2, since the total within-cluster sum of squares drops radically when moving from one to two clusters.
#Let's set the number of centers to 2 and cluster again.
# k-means clustering
km2 <-kmeans(Bostonscaled2, centers = 2)
# plot the Boston dataset with clusters
pairs(Bostonscaled2[1:10], col = km2$cluster)
When we compare the clusters against the crime rate and the other variables, reasonably clear grouping can be seen e.g. for rad and tax, whereas nox and rm scatter more.
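As a rough numeric check of this visual impression, the two clusters could be cross-tabulated against the crime categories created earlier (a sketch, assuming the crime factor from the LDA part is still in the workspace and in the original row order):
# how the two k-means clusters line up with the crime rate categories
table(crime = crime, cluster = km2$cluster)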
#Bonus
#Superbonus
This week we explore the human dataset, which can be found here.
In addition, related metadata is available here and here.
The dataset contains country-level indicators from the UN Human Development reports, with key statistics such as life expectancy at birth (Life.Exp) and the maternal mortality ratio (Mat.Mor).
Let’s read in the data
human<-read.csv("~/Documents/MAIJA/R_IODS/IODS-project/data/human.csv", row.names = 1)
Let’s check basic structure, dimensions and summary first.
str(human)
## 'data.frame': 155 obs. of 8 variables:
## $ Edu2.FM : num 1.007 0.997 0.983 0.989 0.969 ...
## $ Labo.FM : num 0.891 0.819 0.825 0.884 0.829 ...
## $ Edu.Exp : num 17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
## $ Life.Exp : num 81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
## $ GNI : int 64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
## $ Mat.Mor : int 4 6 6 5 6 7 9 28 11 8 ...
## $ Ado.Birth: num 7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
## $ Parli.F : num 39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
dim(human)
## [1] 155 8
summary(human)
## Edu2.FM Labo.FM Edu.Exp Life.Exp
## Min. :0.1717 Min. :0.1857 Min. : 5.40 Min. :49.00
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:11.25 1st Qu.:66.30
## Median :0.9375 Median :0.7535 Median :13.50 Median :74.20
## Mean :0.8529 Mean :0.7074 Mean :13.18 Mean :71.65
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:15.20 3rd Qu.:77.25
## Max. :1.4967 Max. :1.0380 Max. :20.20 Max. :83.50
## GNI Mat.Mor Ado.Birth Parli.F
## Min. : 581 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 12040 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 17628 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :123124 Max. :1100.0 Max. :204.80 Max. :57.50
From this summary we can see that Mat.Mor (maternal mortality ratio) varies a lot between the countries, from a minimum of 1 up to 1100. The expected years of schooling (Edu.Exp) also varies considerably, with a mean of 13.18 years.
Then we can create a graphical overview of the data. This week we want to explore the data through dimension reduction techniques and we want to standardise our values. We can use package GGally for visualization with ggpairs.
library(GGally)
ggpairs(human)
Based on this visualization, most variables of the human dataset do not seem to follow a normal distribution. We can see that the highest positive correlation is between Life.Exp (life expectancy at birth, years) and Edu.Exp (expected years of schooling). A high positive correlation is also detected between Mat.Mor (maternal mortality ratio) and Ado.Birth (adolescent birth rate). In addition, there is a strong negative correlation between Life.Exp and Mat.Mor.
Let’s also run PCA on the non-standardised data and check its summary to explore the importance of the different components.
pca_human <- prcomp(human)
summary_non <- summary(pca_human)
summary_non
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912
## Proportion of Variance 9.999e-01 0.0001 0.00 0.00 0.000 0.000 0.0000
## Cumulative Proportion 9.999e-01 1.0000 1.00 1.00 1.000 1.000 1.0000
## PC8
## Standard deviation 0.1591
## Proportion of Variance 0.0000
## Cumulative Proportion 1.0000
Let’s see how principal component analysis (PCA) looks without standardization.
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("grey30", "deeppink"))
#Let's add the percentages
# rounded percentages of variance captured by each PC
pca_pr<- round(100*summary_non$importance[2,], digits = 1)
# print out the percentages of variance
pca_pr
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## 100 0 0 0 0 0 0 0
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("grey30", "deeppink"), xlab = pc_lab[1], ylab = pc_lab[2])
Here we can see the PCA biplot of the non-standardized data. Without standardization, PC1 captures essentially 100 % of the variability and the result is dominated by a single variable, GNI (gross national income per capita). As the summary shows, the other principal components explain practically none of the variability.
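The reason is visible in the raw variances: GNI is measured on a scale several orders of magnitude larger than the other variables, so it dominates the unscaled PCA (a quick sketch, assuming human is loaded as above):
# variances of the unstandardised variables; GNI dwarfs the rest
sort(sapply(human, var), decreasing = TRUE)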
Now let’s repeat the PCA with standardised values. Let’s standardise the values with “scale”.
First, let’s check the summaries of the standardized data
human_std <- scale(human)
pca_human_std <- prcomp(human_std)
summary_std <- summary(pca_human_std)
summary_std
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6
## Standard deviation 2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion 0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
## PC7 PC8
## Standard deviation 0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion 0.98702 1.00000
#We want to add the percentages to the biplot
# rounded percentages of variance captured by each PC
pca_pr_std <- round(100*summary_std$importance[2,], digits = 1)
# print out the percentages of variance
pca_pr_std
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## 53.6 16.2 9.6 7.6 5.5 3.6 2.6 1.3
# create object pc_lab to be used as axis labels
pc_lab_std <- paste0(names(pca_pr_std), " (", pca_pr_std, "%)")
# draw a biplot
biplot(pca_human_std, cex = c(0.8, 1), col = c("grey30", "deeppink"), xlab = pc_lab_std[1], ylab = pc_lab_std[2])
Now we can see that the results between standardised and non-standardised PCA look very different.
Here we can see that PC1 explains 53.6 % of the variability and PC2 16.2 %. In the standardised PCA biplot, Labo.FM (ratio of female to male labour force participation) and Parli.F (share of female representatives in parliament, percent) contribute mostly to PC2, whereas e.g. Mat.Mor and Ado.Birth contribute mostly to PC1. Most Nordic and other European countries are located towards the upper left side of the plot, in the same direction as variables such as higher Life.Exp and higher GNI.
Standardization allowed us to observe differences between countries in terms of variables other than GNI alone.
Let’s load another dataset, tea, from the FactoMineR package and explore its structure and dimensions.
library(FactoMineR)
#Let's load the tea-dataset
data(tea)
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
dim(tea)
## [1] 300 36
ggpairs(tea[1:5])
There are 300 observations and 36 variables. Here we visually explored only the first five of them, and even so the visualization does not look very clear or informative, so we should try something else. This time we explore the dataset with multiple correspondence analysis (MCA). Let’s see how that looks.
First, let’s pick only a limited number of variables and explore the structure of this dataset
library(ggplot2)
library(tidyr)
library(dplyr)
# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "sex", "lunch")
# select the 'keep_columns' to create a new dataset
tea_time <- dplyr::select(tea, one_of(keep_columns))
# look at the summaries and structure of the data
summary(tea_time)
## Tea How how sugar
## black : 74 alone:195 tea bag :170 No.sugar:155
## Earl Grey:193 lemon: 33 tea bag+unpackaged: 94 sugar :145
## green : 33 milk : 63 unpackaged : 36
## other: 9
## where sex lunch
## chain store :192 F:178 lunch : 44
## chain store+tea shop: 78 M:122 Not.lunch:256
## tea shop : 30
##
str(tea_time)
## 'data.frame': 300 obs. of 7 variables:
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)
# summary of the model
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
## Variance 0.241 0.230 0.191 0.176 0.155 0.141
## % of var. 14.073 13.405 11.145 10.285 9.038 8.235
## Cumulative % of var. 14.073 27.477 38.622 48.907 57.945 66.180
## Dim.7 Dim.8 Dim.9 Dim.10 Dim.11 Dim.12
## Variance 0.126 0.121 0.116 0.090 0.074 0.053
## % of var. 7.337 7.069 6.790 5.221 4.321 3.082
## Cumulative % of var. 73.517 80.586 87.376 92.597 96.918 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.102 0.014 0.010 | -0.498 0.359 0.227 | -0.095
## 2 | -0.227 0.071 0.036 | -0.107 0.017 0.008 | -0.557
## 3 | -0.356 0.175 0.211 | -0.206 0.061 0.070 | -0.382
## 4 | -0.319 0.141 0.139 | -0.534 0.413 0.390 | 0.285
## 5 | -0.223 0.069 0.070 | -0.403 0.236 0.228 | -0.172
## 6 | -0.223 0.069 0.070 | -0.403 0.236 0.228 | -0.172
## 7 | -0.223 0.069 0.070 | -0.403 0.236 0.228 | -0.172
## 8 | -0.227 0.071 0.036 | -0.107 0.017 0.008 | -0.557
## 9 | 0.028 0.001 0.000 | 0.667 0.646 0.262 | 0.198
## 10 | 0.334 0.154 0.070 | 0.641 0.597 0.257 | -0.384
## ctr cos2
## 1 0.016 0.008 |
## 2 0.542 0.219 |
## 3 0.254 0.242 |
## 4 0.141 0.111 |
## 5 0.052 0.042 |
## 6 0.052 0.042 |
## 7 0.052 0.042 |
## 8 0.542 0.219 |
## 9 0.069 0.023 |
## 10 0.258 0.092 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr cos2
## black | 0.459 3.076 0.069 4.541 | 0.176 0.474 0.010
## Earl Grey | -0.289 3.177 0.150 -6.706 | 0.054 0.118 0.005
## green | 0.660 2.836 0.054 4.011 | -0.712 3.469 0.063
## alone | 0.003 0.000 0.000 0.068 | -0.182 1.344 0.062
## lemon | 0.588 2.252 0.043 3.575 | 0.549 2.063 0.037
## milk | -0.301 1.128 0.024 -2.685 | 0.027 0.009 0.000
## other | -0.111 0.022 0.000 -0.337 | 1.751 5.716 0.095
## tea bag | -0.503 8.475 0.330 -9.937 | -0.491 8.501 0.316
## tea bag+unpackaged | 0.091 0.153 0.004 1.060 | 1.076 22.557 0.528
## unpackaged | 2.136 32.429 0.622 13.641 | -0.490 1.792 0.033
## v.test Dim.3 ctr cos2 v.test
## black 1.739 | -0.788 11.452 0.203 -7.797 |
## Earl Grey 1.263 | 0.372 6.662 0.250 8.642 |
## green -4.330 | -0.409 1.379 0.021 -2.489 |
## alone -4.298 | -0.275 3.682 0.141 -6.486 |
## lemon 3.339 | 1.384 15.750 0.237 8.412 |
## milk 0.239 | 0.347 1.891 0.032 3.094 |
## other 5.324 | -1.539 5.313 0.073 -4.680 |
## tea bag -9.714 | -0.053 0.119 0.004 -1.047 |
## tea bag+unpackaged 12.570 | 0.075 0.132 0.003 0.877 |
## unpackaged -3.129 | 0.054 0.026 0.000 0.345 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.153 0.065 0.261 |
## How | 0.057 0.147 0.356 |
## how | 0.693 0.528 0.004 |
## sugar | 0.027 0.048 0.487 |
## where | 0.704 0.657 0.044 |
## sex | 0.050 0.106 0.099 |
## lunch | 0.003 0.057 0.086 |
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali")
With our chosen variables, the MCA plot shows that unpackaged tea tends to be bought from actual tea shops, whereas tea bags are more likely to be bought from chain stores. We can also see that the use of sugar is more common with tea bags than with unpackaged tea. Earl Grey is grouped close to the use of milk in tea, while black tea is grouped closer to no sugar and the use of lemon.